OpenVINO Backend Build Updates #15650
base: main
Conversation
…stall_requirement.py
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15650
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 4 Unrelated Failures as of commit 32bda53 with merge base 054b15d.
NEW FAILURE: the following job has failed.
BROKEN TRUNK: the following jobs failed but were present on the merge base.
👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR needs a
Pull Request Overview
This PR updates the OpenVINO backend build infrastructure to improve flexibility and fix configuration issues. The changes remove the --minimal flag from Python builds to include example dependencies, introduce a new --cpp_runtime_llm flag for building with LLM support, and update documentation and configuration files accordingly.
- Refactored the build script to use conditional LLM dependency inclusion via the `--cpp_runtime_llm` flag (an illustrative invocation is sketched below)
- Updated the CMake configuration to use arrays for better argument handling and removed the inline OpenVINO backend build in favor of `find_package`
- Updated documentation and requirements files to reflect the new build options and fix parameter references
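For context, a hedged sketch of how the refactored script might be invoked; the script path and flag name come from this PR, but treat the invocation itself as illustrative rather than the script's documented interface:

```bash
# Illustrative invocation (assumed, not quoted from the PR's docs):
# --cpp_runtime_llm is the new flag that adds the LLM extension
# CMake options to the C++ runtime build.
./backends/openvino/scripts/openvino_build.sh --cpp_runtime_llm
```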
Reviewed Changes
Copilot reviewed 8 out of 10 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| backends/openvino/scripts/openvino_build.sh | Refactored build script with new --cpp_runtime_llm flag, removed build_llama_runner function, and restructured CMake argument handling |
| backends/openvino/README.md | Updated build documentation to reference new --cpp_runtime_llm flag instead of --llama_runner |
| backends/openvino/requirements.txt | Removed transformers dependency from requirements |
| examples/openvino/README.md | Added stable_diffusion directory structure to documentation |
| examples/openvino/llama/README.md | Updated instructions to use new --cpp_runtime_llm flag and provide explicit CMake commands for building llama runner |
| examples/openvino/stable_diffusion/README.md | Fixed line numbering formatting |
| examples/openvino/stable_diffusion/requirements.txt | Fixed line numbering formatting |
| examples/models/yolo12/README.md | Updated documentation to clarify pip install strategy and fix --video_input to --video_path parameter |
| examples/models/yolo12/requirements.txt | Updated ultralytics version and added opencv-python dependency |
| examples/models/yolo12/CMakeLists.txt | Changed default OpenVINO backend option to OFF, updated XNNPACK library name, and simplified OpenVINO integration to use find_package (see the CMake sketch below) |
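To make the `find_package` change in the last row concrete, here is a minimal sketch under stated assumptions: `find_package(OpenVINO)` and the `openvino::runtime` target are OpenVINO's standard CMake entry points, while the example target name is hypothetical and not taken from the PR diff.

```cmake
# Consume a prebuilt OpenVINO via its CMake package instead of building
# the backend inline. OpenVINO must be discoverable first, e.g. by
# sourcing setupvars.sh or setting OpenVINO_DIR.
find_package(OpenVINO REQUIRED)

# Hypothetical example target; the real target lives in the PR's CMakeLists.txt.
add_executable(yolo12_det main.cpp)
target_link_libraries(yolo12_det PRIVATE openvino::runtime)
```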
From `backends/openvino/scripts/openvino_build.sh`:

```bash
)

if [[ "$llm_enabled" -eq 1 ]]; then
    CMAKE_ARGS+=("-DEXECUTORCH_BUILD_EXTENSION_LLM=ON -DEXECUTORCH_BUILD_EXTENSION_LLM_RUNNER=ON")
```
Copilot AI (Nov 17, 2025):
The CMAKE_ARGS array elements are incorrectly combined into a single string. The two options -DEXECUTORCH_BUILD_EXTENSION_LLM=ON and -DEXECUTORCH_BUILD_EXTENSION_LLM_RUNNER=ON should be added as separate array elements, not as a single string.
Change:

```bash
CMAKE_ARGS+=("-DEXECUTORCH_BUILD_EXTENSION_LLM=ON -DEXECUTORCH_BUILD_EXTENSION_LLM_RUNNER=ON")
```

To:

```bash
CMAKE_ARGS+=("-DEXECUTORCH_BUILD_EXTENSION_LLM=ON" "-DEXECUTORCH_BUILD_EXTENSION_LLM_RUNNER=ON")
```
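A standalone sketch (not part of the PR) of why the split matters: when the array is expanded as `"${CMAKE_ARGS[@]}"`, a combined element reaches `cmake` as one argument containing a space, which is parsed as a single malformed `-D` definition rather than two options.

```bash
#!/usr/bin/env bash
# One element containing a space: expands to a single argument.
ARGS=("-DA=ON -DB=ON")
printf '<%s>\n' "${ARGS[@]}"   # prints: <-DA=ON -DB=ON>

# Two separate elements: expands to two arguments, as cmake expects.
ARGS=("-DA=ON" "-DB=ON")
printf '<%s>\n' "${ARGS[@]}"   # prints: <-DA=ON> then <-DB=ON>
```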
From `backends/openvino/scripts/openvino_build.sh` (same hunk):

```bash
)

if [[ "$llm_enabled" -eq 1 ]]; then
    CMAKE_ARGS+=("-DEXECUTORCH_BUILD_EXTENSION_LLM=ON -DEXECUTORCH_BUILD_EXTENSION_LLM_RUNNER=ON")
```
Copilot AI (Nov 17, 2025):
This line uses a tab character for indentation instead of spaces, which is inconsistent with the rest of the file. Replace the tab with spaces to maintain consistent indentation.
Suggested change (the replacement line is identical except that the leading tab becomes spaces):

```bash
    CMAKE_ARGS+=("-DEXECUTORCH_BUILD_EXTENSION_LLM=ON -DEXECUTORCH_BUILD_EXTENSION_LLM_RUNNER=ON")
```
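One way to spot such lines locally is a generic GNU grep one-liner (an aside, not something this PR adds):

```bash
# GNU grep: -P enables Perl regex so \t matches a literal tab;
# -n prints the offending line numbers.
grep -nP '^\t' backends/openvino/scripts/openvino_build.sh
```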
Summary
Test plan
This PR doesn't introduce new functionality that requires additional testing.
cc: @mergennachin @cbilgin @digantdesai @ynimmaga @suryasidd @daniil-lyakhov @anzr299